19 research outputs found

    DSCo-NG: A Practical Language Modeling Approach for Time Series Classification

    The abundance of time series data in various domains, together with its high dimensionality, makes harvesting useful information from it challenging. To tackle storage and processing challenges, compression-based techniques have been proposed. Our previous work, Domain Series Corpus (DSCo), compresses time series into symbolic strings and takes advantage of language modeling techniques to extract knowledge about the different classes from the training set. However, this approach was flawed in practice due to its excessive memory usage and its need for a priori knowledge about the dataset. In this paper we propose DSCo-NG, which reduces DSCo's complexity and offers an approach to time series classification that is efficient (linear time complexity and low memory footprint), accurate (performance comparable to approaches working on uncompressed data), and generic (applicable to various domains). Our confidence is backed by extensive experimental evaluation against publicly accessible datasets, which also offers insights into when DSCo-NG can be a better choice than other approaches.
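The core idea the abstract describes — compressing series into symbolic strings and scoring them with per-class language models — can be sketched roughly as follows. This is an illustrative reconstruction, not DSCo-NG's actual algorithm: the equal-width binning, the bigram order, the smoothing, and all function names are assumptions.

```python
from collections import Counter

def symbolize(series, alphabet="abcd"):
    """Discretize a numeric series into a symbolic string by equal-width binning
    (a stand-in for whatever symbolization DSCo-NG actually uses)."""
    lo, hi = min(series), max(series)
    if hi == lo:
        return alphabet[0] * len(series)
    width = (hi - lo) / len(alphabet)
    return "".join(
        alphabet[min(int((x - lo) / width), len(alphabet) - 1)] for x in series
    )

def bigram_counts(strings):
    """Count symbol bigrams over a collection of symbolic strings."""
    counts = Counter()
    for s in strings:
        for a, b in zip(s, s[1:]):
            counts[a + b] += 1
    return counts

def train(labeled_series):
    """Build one bigram 'language model' per class from symbolized series."""
    return {
        label: bigram_counts(symbolize(s) for s in series_list)
        for label, series_list in labeled_series.items()
    }

def classify(series, models):
    """Symbolize the series, score it under each class model (with add-one
    smoothing), and pick the best-matching class."""
    s = symbolize(series)

    def score(counts):
        total = sum(counts.values()) or 1
        return sum((counts[a + b] + 1) / (total + 1) for a, b in zip(s, s[1:]))

    return max(models, key=lambda lbl: score(models[lbl]))
```

Training and classification are both linear in the total length of the input, which matches the complexity claim in the abstract.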

    Enhanced multiclass SVM with thresholding fusion for speech-based emotion classification

    As an essential approach to understanding human interactions, emotion classification is a vital component of behavioral studies as well as being important in the design of context-aware systems. Recent studies have shown that speech contains rich information about emotion, and numerous speech-based emotion classification methods have been proposed. However, the classification performance is still short of what is desired for the algorithms to be used in real systems. We present an emotion classification system using several one-against-all support vector machines with a thresholding fusion mechanism to combine the individual outputs, which can effectively increase the emotion classification accuracy at the expense of rejecting some samples as unclassified. Results show that the proposed system outperforms three state-of-the-art methods and that the thresholding fusion mechanism can effectively improve the emotion classification, which is important for applications that require very high accuracy but do not require that all samples be classified. We evaluate the system performance for several challenging scenarios, including speaker-independent tests, tests on noisy speech signals, and tests using non-professional acted recordings, in order to demonstrate the performance of the system and the effectiveness of the thresholding fusion mechanism in real scenarios.
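The thresholding-fusion step the abstract describes could look roughly like this: given decision scores from the one-against-all classifiers, accept the top-scoring class only if it clears a confidence threshold, otherwise reject the sample as unclassified. The threshold value, score format, and all names here are illustrative assumptions, not the paper's actual formulation.

```python
def fuse_with_threshold(scores, threshold=0.5):
    """scores: dict mapping emotion label -> one-against-all decision score.

    Returns the winning label, or None when no classifier is confident
    enough (the sample is rejected as unclassified).
    """
    label, best = max(scores.items(), key=lambda kv: kv[1])
    return label if best >= threshold else None

def accuracy_with_rejection(samples, threshold):
    """Accuracy over accepted samples plus the number rejected, illustrating
    the accuracy-vs-coverage trade-off the fusion mechanism provides."""
    correct = accepted = 0
    for scores, true_label in samples:
        pred = fuse_with_threshold(scores, threshold)
        if pred is not None:
            accepted += 1
            correct += (pred == true_label)
    rejected = len(samples) - accepted
    acc = correct / accepted if accepted else 0.0
    return acc, rejected
```

Raising the threshold trades coverage for accuracy: low-confidence (and often wrong) predictions are filtered out, which is exactly the behavior the abstract highlights for high-accuracy applications.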

    Large Vocabulary On-Line Handwriting Recognition with Context Dependent Hidden Markov Models

    This paper presents a systematic investigation of the benefits resulting from the introduction of context-dependent modeling techniques in HMM-based large-vocabulary handwriting recognition systems. It is shown how context-dependent units can be successfully introduced in complex handwriting systems by using so-called trigraphs, which represent a character embedded within a word together with its left and right context. With the introduction of suitable trigraphs we found relative error-reduction rates of up to 50% for writer-dependent recognition tasks with a 1,000-word vocabulary. We could also verify these clear results for very large vocabularies of 30,000 words, different writers, and different unconstrained writing styles.

    1 Introduction

    The use of Hidden Markov Models (HMMs) for cursive handwriting recognition has emerged in recent years as a serious alternative to traditional approaches. Resulting from these activities, several powerful HMM-based handwriting recognition systems …
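The trigraph expansion described above — each character modeled in its left and right context — can be sketched as follows. The `SIL` boundary marker and the `left-center+right` naming scheme are common conventions borrowed from speech recognition, not necessarily the paper's exact notation.

```python
def word_to_trigraphs(word, boundary="SIL"):
    """Expand each character of a word into a context-dependent trigraph unit
    of the form 'left-center+right', padding word edges with a boundary
    marker so the first and last characters also get full contexts."""
    padded = [boundary] + list(word) + [boundary]
    return [
        f"{padded[i - 1]}-{padded[i]}+{padded[i + 1]}"
        for i in range(1, len(padded) - 1)
    ]
```

For example, `word_to_trigraphs("cat")` yields `["SIL-c+a", "c-a+t", "a-t+SIL"]`. The number of distinct trigraph models grows rapidly with the character set, which is why such systems typically share or cluster parameters across similar contexts.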

    Common Sense Knowledge for Handwritten Chinese Text Recognition

    Compared to human intelligence, computers largely lack the common sense knowledge that people normally acquire during the formative years of their lives. This paper investigates the effects of employing common sense knowledge as a new linguistic context in handwritten Chinese text recognition. Three methods are introduced to supplement the standard n-gram language model: an embedding model, a direct model, and an ensemble of the two. The embedding model uses semantic similarities from common sense knowledge to make the n-gram probability estimation more reliable, especially for n-grams unseen in the training text corpus. The direct model, in turn, considers the linguistic context of the whole document to make up for the short context limit of the n-gram model. The three models are evaluated on a large unconstrained handwriting database, CASIA-HWDB, and the results show that the adoption of common sense knowledge yields improvements in recognition performance, despite the reduced concept list employed here.
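The embedding-model idea described above — using semantic similarity to make unseen n-gram estimates more reliable — might be sketched as interpolating a direct bigram probability with similarity-weighted probabilities from related history words. The similarity table, interpolation weight `lam`, and all names are illustrative assumptions, not the paper's actual model.

```python
def smoothed_bigram(w_prev, w, bigram_prob, similarity, lam=0.7):
    """Estimate P(w | w_prev) by interpolating the direct bigram probability
    with a back-off term built from semantically similar history words,
    each weighted by its similarity to w_prev."""
    direct = bigram_prob.get((w_prev, w), 0.0)
    neighbours = similarity.get(w_prev, {})
    total_sim = sum(neighbours.values())
    if total_sim == 0:
        return direct  # no similar words known: fall back to the raw estimate
    backed_off = sum(
        sim * bigram_prob.get((v, w), 0.0) for v, sim in neighbours.items()
    ) / total_sim
    return lam * direct + (1 - lam) * backed_off
```

The benefit shows for unseen pairs: if "puppy barks" never occurs in the training corpus but "dog barks" does, a high similarity between "puppy" and "dog" transfers probability mass to the unseen bigram instead of leaving it at zero.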